
    Accuracy Assessment of forecasting services

    A service system is a dynamic configuration of people, technologies, organisations and shared information that creates and delivers value to customers and other stakeholders [1]. Examples of customers receiving a service include taking a bus to go somewhere, going to a restaurant for a meal, or a small IT (information technology) company contracting a service from a larger one to save cost and time. Service-oriented architecture (SOA) has become increasingly popular in recent years. This development paradigm allows service providers to offer loosely coupled services, which normally remain under the providers' ownership. As a result, the service user or client does not have to worry about development, maintenance, infrastructure, or any other aspect of how the service works; the user only has to find and choose the appropriate service. On the one hand, this presents several advantages. First, common functionality can be contracted as a service so that an organisation can focus on its core mission. Second, it reduces cost, since contracting a service is cheaper than building it in-house. Third, clients benefit from the provider's latest technologies. On the other hand, there is one major drawback: lack of trust. When you contract a service, you lose direct control, the provider has access to your data, you depend on the provider, and you may experience delays because the functionality no longer runs in-house. That is why users must decide beforehand which service best fits their needs. Each client has different needs: quality (it varies among services), reputation (a well-known or recommended provider usually inspires more confidence), speed (agreements not to exceed thresholds), security (contract and trust in the provider), personalisation (preferential treatment from the provider), and locality (laws differ across countries). Therefore, a customer needs to know which service(s) are best.
    Among all kinds of services, we concentrate on forecasting services, which show in advance a condition or occurrence about the future. There are many domains: weather forecasts, stock market prices, bookmakers' odds, elections, and so on. Consider a domain familiar to all of us: weather forecasting. When we plan to travel, go somewhere, or simply decide what to wear first thing in the morning, we wonder about the weather conditions. To make these decisions, we check the weather forecast on the TV news, a thermometer, or a web site. However, sometimes we check several predictions and they do not agree. Which one will be the most accurate?
    Our goal in this master thesis is to assess the accuracy of these forecasting services in order to help prospective users choose the best one according to their needs. To do so, we compare forecast predictions with actual observations.
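
    The core idea of the thesis, scoring each forecasting service against later real observations, can be illustrated with a small sketch. The error metric (mean absolute error), the data layout, and all values below are illustrative assumptions, not the thesis's actual design.

        # Illustrative sketch: rank forecasting services by mean absolute error (MAE)
        # against observed values. Metric, data layout, and numbers are assumptions.

        def mean_absolute_error(predictions, observations):
            """Average absolute difference between predicted and observed values."""
            pairs = list(zip(predictions, observations))
            return sum(abs(p - o) for p, o in pairs) / len(pairs)

        # Hypothetical daily temperature forecasts (in Celsius) from three services,
        # plus the temperatures actually observed on those days.
        forecasts = {
            "service_a": [21.0, 19.5, 23.0, 18.0],
            "service_b": [22.5, 20.0, 21.5, 17.5],
            "service_c": [20.0, 18.5, 24.5, 19.0],
        }
        observed = [21.5, 19.0, 22.5, 18.5]

        scores = {name: mean_absolute_error(preds, observed) for name, preds in forecasts.items()}
        best = min(scores, key=scores.get)
        print(f"MAE per service: {scores}")
        print(f"Most accurate service so far: {best}")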

    Mercury: using the QuPreSS reference model to evaluate predictive services

    Nowadays, many service providers offer predictive services that show in advance a condition or occurrence about the future. As a consequence, it becomes necessary for service customers to select the predictive service that best satisfies their needs. The QuPreSS reference model provides a standard solution for the selection of predictive services based on the quality of their predictions. QuPreSS has been designed to be applicable in any predictive domain (e.g., weather forecasting, economics, and medicine). This paper presents Mercury, a tool based on the QuPreSS reference model and customized to the weather forecast domain. Mercury measures weather predictive services' quality and automates the context-dependent selection of the most accurate predictive service to satisfy a customer query. To do so, candidate predictive services are monitored so that their predictions can eventually be compared with real observations obtained from a trusted source. Mercury is a proof-of-concept of QuPreSS that aims to show that the selection of predictive services can be driven by the quality of their predictions. Throughout the paper, we show how Mercury was built from the QuPreSS reference model and how it can be installed and used. Peer Reviewed. Postprint (author's final draft).
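
    Mercury's internals are not detailed in this abstract; purely as an illustration of what "context-dependent selection" can mean, the sketch below assumes prediction-error records tagged with a location and forecast horizon, and picks the service with the lowest average error for the context given in a customer query. All names and figures are hypothetical.

        # Hypothetical sketch of context-dependent service selection: choose the
        # monitored service with the lowest mean error for the location and
        # forecast horizon requested by the customer. Not Mercury's actual API.

        from collections import defaultdict

        # (service, location, horizon_hours, absolute_error) records accumulated by
        # comparing past predictions with observations from a trusted source.
        error_log = [
            ("service_a", "Barcelona", 24, 1.2),
            ("service_b", "Barcelona", 24, 0.8),
            ("service_a", "Barcelona", 48, 1.9),
            ("service_b", "Barcelona", 48, 2.4),
        ]

        def select_service(location, horizon_hours):
            """Return the service with the lowest mean error for the given context."""
            totals = defaultdict(lambda: [0.0, 0])  # service -> [error sum, count]
            for service, loc, horizon, err in error_log:
                if loc == location and horizon == horizon_hours:
                    totals[service][0] += err
                    totals[service][1] += 1
            if not totals:
                return None  # no evidence yet for this context
            return min(totals, key=lambda s: totals[s][0] / totals[s][1])

        print(select_service("Barcelona", 24))  # -> service_b
        print(select_service("Barcelona", 48))  # -> service_a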

    Training future ML engineers: a project-based course on MLOps

    Recently, the proliferation of commercial ML-based services has given rise to new job roles, such as ML engineers. Despite being highly sought-after in the job market, ML engineers are difficult to recruit, possibly due to the lack of specialized academic curricula for this position at universities. To address this gap, in the past two years we have supplemented traditional Computer Science and Data Science university courses with a project-based course on MLOps focused on the fundamental skills required of ML engineers. In this paper, we present an overview of the course by showcasing a couple of sample projects developed by our students. Additionally, we share the lessons learned from offering the course at two different institutions. This work is partially supported by the NRRP Initiative – Next Generation EU ("FAIR - Future Artificial Intelligence Research", code PE00000013, CUP H97G22000210007); the Complementary National Plan PNC-I.1 ("DARE - DigitAl lifelong pRevEntion initiative", code PNC0000002, CUP B53C22006420001); and the project TED2021-130923B-I00, funded by MCIN/AEI/10.13039/501100011033 and the European Union Next Generation EU/PRTR. Peer Reviewed. Postprint (author's final draft).

    Safe traffic sign recognition through data augmentation for autonomous vehicles software

    Context: Since autonomous vehicles operate in an open context, their software components, including data-driven ones, have to reliably process inputs (e.g., obtained by cameras) in order to make safe decisions. A key challenge when providing reliable data-driven components is insufficient training data, which could lead to wrong interpretation of the environment, thereby causing accidents. Aim: The goal of our research is to extend the available training data of data-driven components for safe autonomous vehicles, using the example of traffic sign recognition. Method: We developed an approach to create realistic image augmentations of various quality deficits and applied them to the German Traffic Sign Recognition Benchmark dataset (GTSRB). Results: The approach produces images augmented with (any combination of) seven different quality deficits affecting traffic sign recognition (rain, dirt on lens, steam on lens, darkness, motion blur, dirt on sign, backlight), and it considers dependencies between combined quality deficits and influences from other contextual information. Conclusion: Our approach can be used to obtain more comprehensive datasets, especially including samples with quality deficits that are difficult to gather. By structuring the augmentation into a set of basic components, the approach can be adapted to other application domains (e.g., person detection). Peer Reviewed. Postprint (author's final draft).
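
    The paper's augmentation pipeline itself is not reproduced here; as a minimal sketch of the idea, the snippet below applies two of the listed quality deficits (darkness and motion blur) to a traffic sign image. The choice of library (Pillow), the file paths, and the parameter values are assumptions made for illustration only.

        # Minimal sketch of quality-deficit augmentation (darkness + motion blur)
        # on a traffic sign image. Library, paths, and parameters are illustrative
        # assumptions, not the paper's implementation.

        from PIL import Image, ImageEnhance, ImageFilter

        def add_darkness(img, factor=0.4):
            """Darken the image; factor < 1.0 reduces brightness."""
            return ImageEnhance.Brightness(img).enhance(factor)

        def add_motion_blur(img):
            """Approximate horizontal motion blur with a 5x5 averaging kernel."""
            kernel = [0.0] * 25
            for x in range(5):              # ones along the middle row of the kernel
                kernel[2 * 5 + x] = 1.0
            return img.filter(ImageFilter.Kernel((5, 5), kernel, scale=5))

        if __name__ == "__main__":
            # Hypothetical GTSRB sample; replace with an actual dataset image.
            sign = Image.open("gtsrb_sample.png").convert("RGB")
            augmented = add_motion_blur(add_darkness(sign, factor=0.4))
            augmented.save("gtsrb_sample_dark_blur.png")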

    Teaching MLOps in higher education through project-based learning

    Building and maintaining production-grade ML-enabled components is a complex endeavor that goes beyond the current approach of academic education, which focuses on optimizing ML model performance in the lab. In this paper, we present a project-based learning approach to teaching MLOps, focused on demonstrating and gaining experience with emerging practices and tools to automate the construction of ML-enabled components. We examine the design of a course based on this approach, including laboratory sessions that cover the end-to-end ML component life cycle, from model building to production deployment. Moreover, we report preliminary results from the first edition of the course. During the present year, an updated version of the same course is being delivered at two independent universities; the related learning outcomes will be evaluated to analyze the effectiveness of project-based learning for this specific subject. This work is partially supported by the project TED2021-130923B-I00, funded by MCIN/AEI/10.13039/501100011033 and the European Union Next Generation EU/PRTR. Peer Reviewed. Postprint (author's final draft).

    Towards guidelines for building a business case and gathering evidence of software reference architectures in industry

    Background: Software reference architectures are becoming widely adopted by organizations that need to support the design and maintenance of software applications of a shared domain. For organizations that plan to adopt this architecture-centric approach, it becomes fundamental to know the return on investment and to understand how software reference architectures are designed, maintained, and used. Unfortunately, there is little evidence-based support to help organizations with these challenges. Methods: We have conducted action research in an industry-academia collaboration between the GESSI research group and everis, a multinational IT consulting firm based in Spain. Results: The results from this collaboration are being packaged in order to create guidelines that can be used in contexts similar to that of everis. The main result of this paper is the construction of empirically grounded guidelines that support organizations in deciding on the adoption of software reference architectures and in gathering evidence to improve RA-related practices. Conclusions: The guidelines could be used by other organizations outside our industry-academia collaboration. With this goal in mind, we describe the guidelines in detail for their use. Peer Reviewed. Postprint (published version).

    A reuse-based economic model for software reference architectures

    The growing size and complexity of software systems, together with critical time-to-market needs, demand new software engineering approaches to software development. To remain competitive, organizations are challenged to make informed and feasible value-driven design decisions in order to ensure the quality of their systems. However, there is a lack of support for evaluating the economic impact of these decisions with regard to software reference architectures. This hinders communication between architects and management, which can result in poor decisions. This paper aims to open a path in this direction by presenting a pragmatic, preliminary economic model for performing cost-benefit analysis on the adoption of software reference architectures as a key asset for optimizing architectural decision-making. A preliminary validation based on a retrospective study showed the ability of the model to support a cost-benefit analysis presented to the management of an IT consulting company. Preprint.
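
    The paper's economic model is not reproduced in this abstract; as a rough illustration of the kind of reuse-based cost-benefit reasoning involved, the sketch below compares the up-front investment in a reference architecture (RA) against cumulative savings from deriving applications from it. All figures and the break-even formulation are assumptions made for the example.

        # Rough illustration of reuse-based cost-benefit reasoning for a reference
        # architecture (RA). All figures and the formulation are assumed for the
        # example; they are not the paper's actual economic model.

        ra_investment = 250_000.0            # one-off cost of building and documenting the RA
        ra_maintenance_per_year = 30_000.0   # yearly cost of evolving the RA
        savings_per_application = 60_000.0   # effort saved per application derived from the RA
        applications_per_year = 3

        def net_benefit(years):
            """Cumulative savings minus cumulative RA costs after a number of years."""
            savings = savings_per_application * applications_per_year * years
            costs = ra_investment + ra_maintenance_per_year * years
            return savings - costs

        for year in range(1, 6):
            print(f"Year {year}: net benefit = {net_benefit(year):+,.0f}")
        # Under these assumed figures, the RA breaks even during the second year.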

    Verifying predictive services' quality with Mercury

    Due to the success of service technology, there are nowadays many services that make predictions about the future in domains such as weather forecasting, the stock market, and bookmaking. The value delivered by these predictive services relies on the quality of their predictions. This paper presents Mercury, a tool that measures predictive service quality in the weather forecast domain and automates the context-dependent selection of the most accurate predictive service to satisfy a customer query. To do so, candidate predictive services are monitored so that their predictions can eventually be compared with real observations obtained from a trusted source. Mercury is a proof-of-concept to show that the selection of predictive services can be driven by the quality of their predictions. Its service-oriented architecture (SOA) aims to support easy adaptation to other prediction domains and makes feasible its integration into self-adaptive SOA systems, as well as its direct use by end-users as a classical web application. Throughout the paper, we show how Mercury was built. Preprint.
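
    The monitoring step described above is only sketched at a high level in this abstract. As an illustration of the general pattern, the snippet below records each candidate service's prediction for a target time and scores it once the trusted observation for that time becomes available; the service names, values, and storage are stand-ins, not Mercury's actual design.

        # Hypothetical monitoring sketch: store each candidate service's prediction
        # for a target time, then score it once the trusted observation arrives.
        # Names, values, and storage are stand-ins, not Mercury's actual API.

        predictions = []      # (service, target_time, predicted_temp_celsius)
        observations = {}     # target_time -> observed_temp_celsius (trusted source)

        def record_prediction(service, target_time, value):
            predictions.append((service, target_time, value))

        def record_observation(target_time, value):
            observations[target_time] = value

        def errors_so_far():
            """Absolute error of every prediction whose observation has arrived."""
            return [
                (service, target, abs(pred - observations[target]))
                for service, target, pred in predictions
                if target in observations
            ]

        # One monitoring cycle with made-up values:
        record_prediction("service_a", "2024-05-01T12:00", 21.0)
        record_prediction("service_b", "2024-05-01T12:00", 19.0)
        record_observation("2024-05-01T12:00", 20.5)   # trusted observation arrives later
        print(errors_so_far())
        # [('service_a', '2024-05-01T12:00', 0.5), ('service_b', '2024-05-01T12:00', 1.5)]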

    Applying project-based learning to teach software analytics and best practices in data science

    Due to recent industry needs, synergies between data science and software engineering are starting to appear in data science and engineering academic programs. Two such synergies are: applying data science to manage software quality (software analytics), and applying software engineering best practices in data science projects to ensure quality attributes such as maintainability and reproducibility. The lack of these synergies in academic programs has been argued to be an educational problem. Hence, it becomes necessary to explore how to teach software analytics and software engineering best practices in data science programs. In this context, we provide hands-on material for conducting laboratories that apply project-based learning in order to teach software analytics and software engineering best practices to data science students. We aim to improve the software engineering skills of data science students so that they produce higher-quality software supported by software analytics. We focus on two skills: following a process and applying software engineering best practices. We use project-based learning as the main teaching methodology to reach the intended outcomes. This teaching experience describes the introduction of project-based learning in a laboratory where students applied data science and software engineering best practices to analyze and detect improvements in software quality. We carried out a case study over two academic semesters with 63 data science bachelor students. The students found the synergies of the project positive for their learning. In the project, they highlighted both the utility of using the CRISP-DM data mining process and of software engineering best practices such as a software project structure convention applied to a data science project. This paper was partly funded by a teaching innovation project of ICE@UPC-BarcelonaTech (entitled "Audiovisual and digital material for data engineering, a teaching innovation project with open science") and the "Beatriz Galindo" Spanish Program BEA-GAL18/00064. Peer Reviewed. Postprint (published version).